Build a Sign Language Recognizer

In this project, you will build a system that can recognize words communicated using American Sign Language (ASL). You will be provided a preprocessed dataset of tracked hand and nose positions extracted from video. Your goal is to train a set of Hidden Markov Models (HMMs) on part of this dataset and use them to identify individual words from test sequences.
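The core recognition idea can be sketched with a toy discrete HMM: keep one model per word and pick the word whose model assigns the highest log-likelihood to an observation sequence (via the forward algorithm). The word names, probabilities, and symbols below are invented for illustration only; the project itself uses continuous features and a library such as hmmlearn rather than this hand-rolled version.

```python
import math

def forward_log_likelihood(obs, start_p, trans_p, emit_p):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the forward algorithm (states/symbols are indices)."""
    n_states = len(start_p)
    # alpha[i] = P(obs[:t+1], state_t = i)
    alpha = [start_p[i] * emit_p[i][obs[0]] for i in range(n_states)]
    for o in obs[1:]:
        alpha = [
            sum(alpha[j] * trans_p[j][i] for j in range(n_states)) * emit_p[i][o]
            for i in range(n_states)
        ]
    return math.log(sum(alpha))

# Two toy "word" models (transition and emission matrices are made up).
start = [0.6, 0.4]
models = {
    "hello": ([[0.7, 0.3], [0.2, 0.8]], [[0.9, 0.1], [0.2, 0.8]]),
    "book":  ([[0.4, 0.6], [0.5, 0.5]], [[0.1, 0.9], [0.8, 0.2]]),
}

seq = [0, 0, 1, 0]  # a test observation sequence
# Recognition = the word whose model best explains the sequence.
best = max(models, key=lambda w: forward_log_likelihood(seq, start, *models[w]))
```

In the real project the same comparison is done with trained Gaussian-emission HMMs, scoring each test sequence against every word model.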

As an optional challenge, you can incorporate Statistical Language Models (SLMs), which capture the conditional probability of particular word sequences occurring. This can improve the recognition accuracy of your system.
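One common way to use an SLM is to rescore candidate words: combine each word's HMM log-likelihood with a weighted bigram log-probability given the previous word. The words, scores, and weight below are all hypothetical, purely to show the mechanics.

```python
import math

# Hypothetical bigram log-probabilities P(word | previous word).
bigram_logp = {
    ("JOHN", "WRITE"): math.log(0.4),
    ("JOHN", "READ"):  math.log(0.1),
}

# Hypothetical per-word HMM log-likelihoods for the word following "JOHN".
hmm_logl = {"WRITE": -110.0, "READ": -108.0}

alpha = 20.0  # language-model weight; a tuning knob, chosen arbitrarily here
best = max(
    hmm_logl,
    key=lambda w: hmm_logl[w] + alpha * bigram_logp[("JOHN", w)],
)
```

Here the acoustic-style score alone slightly prefers "READ", but the language model tilts the combined score toward the more probable continuation, which is exactly how an SLM can reduce the word error rate.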

Starter code and dataset

Clone this GitHub repository and go through the README to get an overview of the project:

https://github.com/udacity/AIND-Recognizer

Then run the starter notebook using the following command:

jupyter notebook asl_recognizer.ipynb

Parts

In the notebook, you will find 3 required parts that you must complete, and an optional part at the end.

PART 1: Data

PART 2: Model Selection

PART 3: Recognizer

PART 4: (OPTIONAL) Improve the WER (Word Error Rate) with Language Models

Complete each of these parts by implementing segments of code as instructed, and filling out any written responses to questions in markdown cells. Make sure you run each cell and include the output. Your code must be free of errors and needs to meet the specified requirements.

Submission

Once you have completed the project and met all the requirements set in the rubric (see below), please save the notebook as an HTML file. You can do this by going to the File menu in the notebook and choosing "Download as" > HTML. Submit the following files (and only these files) in a .zip archive:

  • asl_recognizer.ipynb
  • asl_recognizer.html
  • my_model_selectors.py
  • my_recognizer.py

Note: Please do not include the data directory, as it will lead to a huge .zip file and may be rejected by our review system.

Evaluation

Your submission will be evaluated against this rubric.